
    Learning to Count Isomorphisms with Graph Neural Networks

    Subgraph isomorphism counting is an important problem on graphs, as many graph-based tasks exploit recurring subgraph patterns. Classical methods usually boil down to a backtracking framework that needs to navigate a huge search space with prohibitive computational costs. Some recent studies resort to graph neural networks (GNNs) to learn a low-dimensional representation for both the query and input graphs, in order to predict the number of subgraph isomorphisms on the input graph. However, typical GNNs employ a node-centric message passing scheme that receives and aggregates messages on nodes, which is inadequate for the complex structure matching required by isomorphism counting. Moreover, on an input graph, the space of possible query graphs is enormous, and different parts of the input graph will be triggered to match different queries. Thus, expecting a fixed representation of the input graph to match diversely structured query graphs is unrealistic. In this paper, we propose a novel GNN called Count-GNN for subgraph isomorphism counting, to deal with the above challenges. At the edge level, given that an edge is an atomic unit for encoding graph structures, we propose an edge-centric message passing scheme, where messages on edges are propagated and aggregated based on the edge adjacency to preserve fine-grained structural information. At the graph level, we modulate the input graph representation conditioned on the query, so that the input graph can be adapted to each query individually to improve their matching. Finally, we conduct extensive experiments on a number of benchmark datasets to demonstrate the superior performance of Count-GNN.
    Comment: AAAI-23 main track
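The edge-centric scheme can be illustrated with a minimal sketch: messages live on directed edges, and each edge aggregates from the edges that feed into its source node. The toy triangle graph, the mean aggregation, and the 0.5/0.5 update rule are illustrative assumptions, not Count-GNN's actual parameterization.

```python
# Minimal sketch of one edge-centric message-passing step (assumptions:
# mean aggregation, a fixed 0.5/0.5 update; not Count-GNN's real layers).

def edge_adjacency(edges):
    """Edge i = (u, v) receives messages from edges that end at u."""
    preds = {}
    for i, (u, v) in enumerate(edges):
        preds[i] = [j for j, (_, x) in enumerate(edges) if x == u]
    return preds

def edge_message_pass(edges, feats, steps=1):
    """feats[i]: scalar feature on edge i; aggregate from adjacent edges."""
    preds = edge_adjacency(edges)
    h = dict(feats)
    for _ in range(steps):
        new_h = {}
        for i in range(len(edges)):
            msgs = [h[j] for j in preds[i]]
            agg = sum(msgs) / len(msgs) if msgs else 0.0
            new_h[i] = 0.5 * h[i] + 0.5 * agg  # illustrative update rule
        h = new_h
    return h

# Toy directed triangle: 0->1, 1->2, 2->0; only edge 0 starts "active".
edges = [(0, 1), (1, 2), (2, 0)]
out = edge_message_pass(edges, {0: 1.0, 1: 0.0, 2: 0.0}, steps=1)
```

After one step, the activation has flowed along the edge adjacency: the edge following (0, 1) picks up half its signal, while the edge two hops away is still untouched.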

    BuffGraph: Enhancing Class-Imbalanced Node Classification via Buffer Nodes

    Class imbalance in graph-structured data, where minority classes are significantly underrepresented, poses a critical challenge for Graph Neural Networks (GNNs). To address this challenge, existing studies generally generate new minority nodes, along with edges connecting them to the original graph, to balance the classes. However, they do not address the fact that majority classes still propagate information to minority nodes through edges in the original graph, which introduces a bias towards the majority classes. To address this, we introduce BuffGraph, which inserts buffer nodes into the graph to modulate the impact of the majority classes and improve minority-class representation. Our extensive experiments across diverse real-world datasets empirically demonstrate that BuffGraph outperforms existing baseline methods for class-imbalanced node classification in both natural and imbalanced settings. Code is available at https://anonymous.4open.science/r/BuffGraph-730A
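The buffer-node idea can be sketched as a graph rewrite: every edge from a majority-class node to a minority-class node is rerouted through a fresh buffer node, so majority information reaches the minority node only indirectly. The edge-list representation and the simple majority-to-minority criterion are assumptions for illustration; BuffGraph's actual insertion and buffer-feature scheme is richer.

```python
# Hedged sketch of buffer-node insertion (assumption: buffering exactly the
# majority->minority edges of a labeled edge list; not BuffGraph's full method).

def insert_buffer_nodes(edges, labels, majority_class):
    """Route each majority->minority edge through a fresh buffer node."""
    next_id = max(labels) + 1          # ids for new buffer nodes
    new_edges, buffers = [], []
    for u, v in edges:
        if labels[u] == majority_class and labels[v] != majority_class:
            b = next_id
            next_id += 1
            buffers.append(b)
            new_edges += [(u, b), (b, v)]  # message now passes through b
        else:
            new_edges.append((u, v))
    return new_edges, buffers

labels = {0: 1, 1: 1, 2: 0}            # class 1 is the majority class
new_edges, buffers = insert_buffer_nodes([(0, 1), (1, 2)], labels, 1)
```

Only the heterophilic edge (1, 2) is split; the majority-to-majority edge (0, 1) is left untouched, so homophilic propagation is preserved.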

    HGPROMPT: Bridging Homogeneous and Heterogeneous Graphs for Few-shot Prompt Learning

    Graph neural networks (GNNs) and heterogeneous graph neural networks (HGNNs) are prominent techniques for homogeneous and heterogeneous graph representation learning, yet their performance in an end-to-end supervised framework greatly depends on the availability of task-specific supervision. To reduce the labeling cost, pre-training on self-supervised pretext tasks has become a popular paradigm, but there is often a gap between the pre-trained model and downstream tasks, stemming from the divergence in their objectives. To bridge the gap, prompt learning has risen as a promising direction, especially in few-shot settings, without the need to fully fine-tune the pre-trained model. While there has been some early exploration of prompt-based learning on graphs, it primarily deals with homogeneous graphs, ignoring the heterogeneous graphs that are prevalent in downstream applications. In this paper, we propose HGPROMPT, a novel pre-training and prompting framework to unify not only pre-training and downstream tasks but also homogeneous and heterogeneous graphs via a dual-template design. Moreover, we propose a dual-prompt in HGPROMPT to assist a downstream task in locating the most relevant prior, bridging the gaps caused by not only feature variations but also heterogeneity differences across tasks. Finally, we thoroughly evaluate and analyze HGPROMPT through extensive experiments on three public datasets.
    Comment: Accepted by AAAI202
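To make the prompting idea concrete, one common form of graph prompt is a learnable element-wise vector applied to frozen embeddings at readout time; the sketch below shows that single mechanism only. The mean-pool readout and the specific prompt values are assumptions; HGPROMPT's dual-template/dual-prompt design over heterogeneous graphs is not reproduced here.

```python
# Toy sketch of a readout-level feature prompt (assumptions: frozen node
# embeddings, mean pooling; not HGPROMPT's actual dual-prompt design).

def prompted_readout(node_embs, prompt):
    """Weight each embedding dimension by the prompt, then mean-pool."""
    dim = len(prompt)
    pooled = [0.0] * dim
    for emb in node_embs:
        for d in range(dim):
            pooled[d] += prompt[d] * emb[d]
    return [x / len(node_embs) for x in pooled]

embs = [[1.0, 2.0], [3.0, 4.0]]        # frozen embeddings of two nodes
prompt = [1.0, 0.5]                    # hypothetical learned prompt values
g = prompted_readout(embs, prompt)
```

Because only the small prompt vector is trained while the pre-trained encoder stays frozen, this style of adaptation suits few-shot settings.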

    ULTRA-DP: Unifying Graph Pre-training with Multi-task Graph Dual Prompt

    Recent research has demonstrated the efficacy of pre-training graph neural networks (GNNs) to capture transferable graph semantics and enhance the performance of various downstream tasks. However, the semantic knowledge learned from pretext tasks might be unrelated to the downstream task, leading to a semantic gap that limits the application of graph pre-training. To reduce this gap, traditional approaches propose hybrid pre-training, which combines various pretext tasks in a multi-task learning fashion to learn multi-grained knowledge; however, this cannot distinguish between tasks, so transferable task-specific knowledge is distorted by interference across tasks. Moreover, most GNNs cannot distinguish nodes located in different parts of the graph, so they fail to learn position-specific knowledge, leading to suboptimal performance. In this work, inspired by prompt-based tuning in natural language processing, we propose a unified framework for graph hybrid pre-training that injects task identification and position identification into GNNs through a prompt mechanism, namely multi-task graph dual prompt (ULTRA-DP). Based on this framework, we propose a prompt-based transferability test to find the most relevant pretext task in order to reduce the semantic gap. To implement the hybrid pre-training tasks, beyond the classical edge prediction task (node-node level), we further propose a novel pre-training paradigm based on a group of k-nearest neighbors (node-group level). Their combination across different scales is able to express more structural semantics comprehensively and derive richer multi-grained knowledge. Extensive experiments show that our proposed ULTRA-DP can significantly enhance the performance of hybrid pre-training methods and generalizes to other pre-training tasks and backbone architectures.
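The node-group pretext task hinges on forming a k-nearest-neighbor group for an anchor node; a minimal sketch of that pairing step is below. Plain Euclidean distance over toy embeddings is an assumption for illustration, and ULTRA-DP's actual similarity measure, sampling, and prompt injection are not reproduced.

```python
# Sketch of node-group pairing for a k-NN pretext task (assumptions: toy
# embeddings, squared Euclidean distance; not ULTRA-DP's full pipeline).

def knn_group(embs, anchor, k):
    """Return the indices of the k nearest neighbors of `anchor`."""
    dists = []
    for i, e in enumerate(embs):
        if i == anchor:
            continue
        d = sum((a - b) ** 2 for a, b in zip(embs[anchor], e))
        dists.append((d, i))
    dists.sort()
    return [i for _, i in dists[:k]]

embs = [[0.0], [0.1], [5.0], [0.2]]    # 1-D toy embeddings of four nodes
group = knn_group(embs, anchor=0, k=2)
```

The anchor and its group then serve as a positive pair at the node-group level, complementing node-node edge prediction at a coarser scale.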

    Evaluation of Biogas and Solar Energy Coupling on Phase-Change Energy-Storage Heating Systems: Optimization of Supply and Demand Coordination

    Biogas heating plays a crucial role in the transition to clean energy and the mitigation of agricultural pollution. To address the issue of low biogas production during winter, the implementation of a multi-energy complementary system has become essential for ensuring heating stability. To guarantee economical, stable, and energy-saving operation of the heating system, this study proposes coupling biogas and solar energy with a phase-change energy-storage heating system. A mathematical model of the heating system was developed, taking an office building in Xilin Hot, Inner Mongolia (43.96000° N, 116.03000° E) as a case study. Additionally, the Sparrow Search Algorithm (SSA) was employed to determine equipment selection and optimize the dynamic operation strategy, considering minimum cost and the balance between supply and the building's load demand. The operating economy was evaluated using metrics such as payback period, load ratio, and daily rate of return. The results demonstrate that the multi-energy complementary heating system, with a balanced supply and demand, yields significant economic benefits compared to the central heating system, with a payback period of 4.15 years and a daily return rate of 32.97% under the most unfavorable working conditions. Moreover, the development of a daily optimization strategy holds practical engineering significance, and the optimal scheduling of the multi-energy complementary system, with a balance of supply and demand, is realized.
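The supply-demand coordination can be pictured with a toy hourly dispatch rule: serve the load from solar first, then biogas, then the phase-change store. The merit order and all numbers below are hypothetical assumptions for illustration; the paper's SSA-optimized scheduling is not reproduced here.

```python
# Illustrative one-hour dispatch for a solar + biogas + phase-change-storage
# system (assumption: a fixed solar-first merit order, not the SSA strategy).

def dispatch(load, solar, biogas_cap, storage):
    """Return (solar_used, biogas_used, storage_used) in kWh for one hour."""
    s = min(load, solar)               # free solar heat first
    b = min(load - s, biogas_cap)      # then biogas up to capacity
    st = min(load - s - b, storage)    # phase-change store covers the rest
    return s, b, st

# Hypothetical winter hour: 100 kWh demand, limited solar and biogas.
plan = dispatch(load=100.0, solar=40.0, biogas_cap=50.0, storage=30.0)
```

An optimizer such as SSA would instead search over storage charge/discharge schedules across the whole day to minimize cost while keeping each hour balanced.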

    A Survey of Imbalanced Learning on Graphs: Problems, Techniques, and Future Directions

    Graphs represent interconnected structures prevalent in a myriad of real-world scenarios. Effective graph analytics, such as graph learning methods, enables users to gain profound insights from graph data, underpinning various tasks including node classification and link prediction. However, these methods often suffer from data imbalance, a common issue in graph data where certain segments possess abundant data while others are scarce, thereby leading to biased learning outcomes. This necessitates the emerging field of imbalanced learning on graphs, which aims to correct these data distribution skews for more accurate and representative learning outcomes. In this survey, we embark on a comprehensive review of the literature on imbalanced learning on graphs. We begin by providing a definitive understanding of the concept and related terminologies, establishing a strong foundational understanding for readers. Following this, we propose two comprehensive taxonomies: (1) the problem taxonomy, which describes the forms of imbalance we consider, the associated tasks, and potential solutions; (2) the technique taxonomy, which details key strategies for addressing these imbalances, and aids readers in their method selection process. Finally, we suggest prospective future directions for both problems and techniques within the sphere of imbalanced learning on graphs, fostering further innovation in this critical area.
    Comment: The collection of awesome literature on imbalanced learning on graphs: https://github.com/Xtra-Computing/Awesome-Literature-ILoG
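One family of techniques such surveys cover is loss re-weighting, where each class's contribution to the training loss is scaled inversely to its frequency. A minimal sketch over node labels follows; the balanced normalization `n / (num_classes * count)` is one common convention, chosen here as an assumption.

```python
# Minimal inverse-frequency class weighting for imbalanced node labels
# (assumption: the "balanced" normalization n / (num_classes * count)).
from collections import Counter

def class_weights(labels):
    """Map each class to a weight inversely proportional to its frequency."""
    counts = Counter(labels)
    n, c = len(labels), len(counts)
    return {cls: n / (c * cnt) for cls, cnt in counts.items()}

# Toy labels: class 0 is three times as frequent as class 1.
w = class_weights([0, 0, 0, 1])
```

The rare class receives a proportionally larger weight, counteracting the biased learning outcomes that imbalance would otherwise cause.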

    A Study of Wolf Pack Algorithm for Test Suite Reduction

    Modern smart meter programs are iterating at an ever-increasing rate, placing higher demands on the software testing of smart meters. How to reduce the cost of software testing has become a focus of current research. Reducing test overhead is the most direct way to reduce the cost of software testing, and test suite reduction is one of the principal means of doing so. This paper proposes a smart meter test suite reduction technique based on the Wolf Pack Algorithm. First, test suite reduction for the smart meter program is formulated as a binary set-cover optimization problem; then, the Wolf Pack Algorithm is adapted by encoding the positions of individual wolves as a 0/1 matrix; finally, the optimal subset of test cases is obtained by iteration. In simulations over different smart meter programs and test suites of different sizes, the experimental results show that the Wolf Pack Algorithm outperforms similar algorithms in terms of both the rate of obtaining the optimal solution and the test overhead of the optimal subset.
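The set-cover formulation the paper uses can be made concrete with a classical greedy baseline: each test case covers a set of requirements, and a 0/1 selection must cover them all. The greedy heuristic below is a standard reference point, not the Wolf Pack Algorithm itself, and the toy suite is an assumption for illustration.

```python
# Greedy baseline for test suite reduction as binary set cover (this is the
# textbook greedy heuristic, not the paper's Wolf Pack Algorithm).

def greedy_reduce(coverage):
    """coverage: {test_id: set of requirements}. Return selected test ids."""
    uncovered = set().union(*coverage.values())
    selected = []
    while uncovered:
        # Pick the test covering the most still-uncovered requirements.
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        selected.append(best)
        uncovered -= coverage[best]
    return selected

# Hypothetical suite: t3 alone covers all three requirements.
suite = {"t1": {1, 2}, "t2": {2, 3}, "t3": {1, 2, 3}}
picked = greedy_reduce(suite)
```

A metaheuristic such as the Wolf Pack Algorithm explores the same 0/1 selection space but can escape the locally greedy choices this baseline commits to.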